Convergence guarantees for generalized adaptive stochastic search methods for continuous global optimization

Author

  • Rommel G. Regis
Abstract

Rommel G. Regis, Mathematics Department, Saint Joseph’s University, Philadelphia, PA 19131, USA, [email protected]

June 23, 2010

This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. The results also apply to algorithms that combine local and global stochastic search strategies, as well as to those that combine deterministic and stochastic search strategies, which makes them applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, the paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice, along with convergence conditions for the special case in which the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, the paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this property is equivalent to the iterates getting arbitrarily close to any point in the domain with probability 1, which in turn is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are proved.
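The abstract states its conditions only at a high level. As a minimal, hedged sketch (not the paper's algorithm; all names and parameters below are invented for illustration), the following Python routine mixes a local Gaussian step with occasional uniform sampling over a box domain. The uniform component is one simple, well-known way to keep the conditional sampling density bounded away from zero on the domain, which is the flavor of sufficient condition the abstract describes.

    import numpy as np

    def adaptive_stochastic_search(f, lower, upper, n_iter=10_000,
                                   mix_prob=0.1, sigma=0.1, seed=0):
        """Sketch of an adaptive stochastic search on a box domain.

        With probability mix_prob the candidate is drawn uniformly from
        the box; otherwise it is a Gaussian perturbation of the incumbent.
        The uniform component keeps the conditional sampling density
        bounded below on the domain, a simple sufficient condition for
        convergence to the global minimum with probability 1.
        """
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        x_best = rng.uniform(lower, upper)
        f_best = f(x_best)
        for _ in range(n_iter):
            if rng.random() < mix_prob:
                x = rng.uniform(lower, upper)            # global (uniform) step
            else:
                x = np.clip(x_best + sigma * rng.standard_normal(lower.size),
                            lower, upper)                # local (Gaussian) step
            fx = f(x)
            if fx < f_best:                              # keep the best point so far
                x_best, f_best = x, fx
        return x_best, f_best

    # Example: a multimodal test function on [-5, 5]^2.
    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    print(adaptive_stochastic_search(rastrigin, [-5, -5], [5, 5]))

Replacing the Gaussian perturbation with a heavy-tailed Cauchy step would give a variant closer to the elliptical-distribution case the abstract mentions.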


Similar articles

Hybrid Probabilistic Search Methods for Simulation Optimization

Discrete-event simulation-based optimization is the process of finding the optimal design of a stochastic system when the performance measure(s) can only be estimated via simulation. Randomness in simulation outputs often challenges the correct selection of the optimum. We propose an algorithm that merges Ranking and Selection procedures with a large class of random search methods for continu...
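As a rough, hedged illustration of the hybrid idea (not the proposed algorithm; the replication-averaging step below is only a crude stand-in for a full Ranking-and-Selection procedure, and all names and parameters are invented):

    import numpy as np

    def noisy_random_search(g, lower, upper, n_iter=500, n_reps=20, seed=1):
        """Random search for simulation optimization: each candidate's
        performance is estimated by averaging n_reps noisy simulation
        replications before it is compared against the incumbent."""
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        x_best = rng.uniform(lower, upper)
        m_best = np.mean([g(x_best, rng) for _ in range(n_reps)])
        for _ in range(n_iter):
            x = rng.uniform(lower, upper)
            m = np.mean([g(x, rng) for _ in range(n_reps)])
            if m < m_best:
                x_best, m_best = x, m
        return x_best, m_best

    # Noisy "simulation" output: true objective plus Gaussian noise.
    g = lambda x, rng: float(np.sum(x**2) + rng.normal(0.0, 0.5))
    print(noisy_random_search(g, [-2, -2], [2, 2]))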


Department of Systems Engineering & Operations Research

We propose a randomized search method called Stochastic Model Reference Adaptive Search (SMRAS) for solving stochastic optimization problems in situations where the objective functions cannot be evaluated exactly, but can be estimated with some noise (or uncertainty), e.g., via simulation. The method is a generalization of the recently proposed Model Reference Adaptive Search (MRAS) method for ...


On the Convergence of Adaptive Stochastic Search Methods for Constrained and Multi-objective Black-Box Optimization

Stochastic search methods for global optimization and multi-objective optimization are widely used in practice, especially on problems with black-box objective and constraint functions. Although there are many theoretical results on the convergence of stochastic search methods, relatively few deal with black-box constraints and multiple black-box objectives, and previous convergence analyses req...


Discrete Size and Discrete-Continuous Configuration Optimization Methods for Truss Structures Using the Harmony Search Algorithm

Many methods have been developed for structural size and configuration optimization in which cross-sectional areas are usually assumed to be continuous. In most practical structural engineering design problems, however, the design variables are discrete. This paper proposes two efficient structural optimization methods based on the harmony search (HS) heuristic algorithm that treat both discret...


Homotopy Analysis for Tensor PCA

Developing efficient and guaranteed nonconvex algorithms has been an important challenge in modern machine learning. Algorithms with good empirical performance such as stochastic gradient descent often lack theoretical guarantees. In this paper, we analyze the class of homotopy or continuation methods for global optimization of nonconvex functions. These methods start from an objective function...
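As a hedged sketch of the continuation idea this snippet describes (not this paper's method, and unrelated to tensor PCA specifically; the Gaussian-smoothing estimator and the sigma schedule are assumptions made for illustration):

    import numpy as np
    from scipy.optimize import minimize

    def smoothed(f, sigma, dim, n_samples=200, seed=2):
        """Monte Carlo Gaussian smoothing f_sigma(x) = E[f(x + sigma*z)],
        with common random numbers so each stage optimizes a fixed surface."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n_samples, dim))
        return lambda x: np.mean([f(x + sigma * zi) for zi in z])

    def homotopy_minimize(f, x0, sigmas=(2.0, 1.0, 0.5, 0.1, 0.0)):
        """Continuation: minimize heavily smoothed versions of f first,
        warm-starting each stage from the previous stage's minimizer."""
        x = np.asarray(x0, float)
        for sigma in sigmas:
            obj = f if sigma == 0.0 else smoothed(f, sigma, x.size)
            x = minimize(obj, x, method="Nelder-Mead").x
        return x

    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    print(homotopy_minimize(rastrigin, x0=[3.0, -3.5]))

Heavy smoothing averages out the cosine ripples of this test function, leaving a nearly quadratic surface, so the early stages steer the later, less-smoothed stages toward the global basin.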




Journal:
  • European Journal of Operational Research

Volume: 207  Issue: —

Pages: —

Publication date: 2010